The new EricMartindale.com is an experiment in data aggregation, and might have a few bugs. Feel free to explore, and then provide feedback directly to @martindale.
My list of entrepreneurial shortcomings includes Mirascape, which aspired to be a ubiquitous augmented reality (AR) operating system for the real world. The problems we were solving (and our packaged solution) would have been the backbone for all of the [imagined] technology you see in this Samsung promo video for a new technology they're excited about: transparent and flexible OLED displays. [1]
If you're not familiar with augmented reality, it is the visual overlay of otherwise hidden information on the real world as you observe it.
While you can ogle at ostentatious technologies like the embedded-display contact lenses the University of Washington is so proud of [2], it's exciting to see companies like TDK [3] and Laster Technologies [4] bring these kinds of stepping-stone technologies to market. We can all download and install the awkward and barely applicable consumer-level AR applications on our smartphones ([5], [6], and [7]), but they will remain novelty applications until we see major innovation in the display space.
One of the more practical examples of augmented reality I've seen in the real world is WordLens [8] (sadly only available for iOS), which provides instantaneous video translation through your device. It's not hard to imagine a pair of Oakley glasses with this display technology built in, providing you with always-on translation while traveling somewhere unfamiliar. Or perhaps even displaying your friend's tweet as a speech bubble above their head for a few seconds -- imagine how amazing it could be if it were built right.
I genuinely hope to see more of this transparent display technology built into consumer-level products, and eyewear in particular. We need a lot more developers exploring the practical applications of augmented reality, not just displaying compass-aligned markers over a geotagged Wikipedia article or Flickr photo. The high-powered hardware necessary for real-time computer vision processing is coming, and the applied software world needs to be ready for it.
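For the curious, the math behind those compass-aligned markers is simpler than it looks. Here's a minimal sketch (function names, field of view, and screen width are my own illustrative assumptions, not any particular app's API): compute the great-circle bearing from the observer to a geotagged point, compare it against the device's compass heading, and map the difference onto the screen.

```python
import math

def bearing_to_poi(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees) from the observer to a point of interest."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def marker_screen_x(bearing, heading, h_fov=60.0, screen_w=1080):
    """Horizontal pixel position of a marker, given the device's compass heading.

    Returns None when the point lies outside the camera's horizontal field of view.
    """
    delta = (bearing - heading + 540) % 360 - 180  # signed offset in [-180, 180)
    if abs(delta) > h_fov / 2:
        return None
    return screen_w / 2 + delta / h_fov * screen_w
```

This is all the "novelty" apps really do: GPS plus compass, no computer vision at all, which is exactly why the results feel so shallow.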
Replies are automatically detected from social media, including Twitter, Facebook, and Google+. To add a comment, include a direct link to this post in your message and it'll show up here within a few minutes.
Compelling Narratives using Augmented Reality +Google Glass has, for better or worse, shaped the narrative around augmented reality this past year. We've seen the arms (eyes?) race rapidly develop, culminating recently with the +YCombinator-backed +meta announcing their "SpaceGlasses" [1], one of the first truly compelling experiences built around a convincingly capable device [2].
The hardest part of augmented reality is not the hardware, nor the computer vision software (both extremely difficult academic challenges in their own right, and certainly not to be undertaken by the faint of heart), but the experience.
These problems will be solved, through no small effort, but they will be solved. The most daunting challenge is to build a compelling story that binds the available data (read "the Internet") to the real world, and exposes it in an unobtrusive and seamless fashion. This too will emerge naturally, but early pioneers in the space need to think carefully about the application of augmented reality in order to succeed; no one wants a world filled with advertisements [3], and in fact, some even try to eliminate them [4].
Here, +Field Trip attempts to build one such compelling story. The experience of contextual information making itself available without interrupting your interactions with the real world is so tantalizingly close you can feel it, but one wonders just how much control the user will have over the frequency and relevance of the information "popups". In the early days of the software industry (late 60s, early 70s), an ongoing debate between the [then] default of free software and closed software unfolded, setting the foundation for today's conversation around open source and free [5] software. I'll be talking more about this in a presentation at the upcoming #RTP180: Open Source All Things event [6] in North Carolina.
It's another step forward for ubiquitous augmented reality, an exciting one indeed, but one that won't achieve mass adoption until the user can control their own experience [7].
No problem, +Keiichi Matsuda! I was glad to find you here on Google+. The last startup I was a part of (Mirascape) was very familiar with it, as we were building an augmented reality platform and used it as reference material.
Have any thoughts on Google's foray into augmented reality? I ran into Johnny Lee from Google [x] at ARE 2011 last year, but didn't find out that they were working on glasses until earlier this year.
Google[x] is making their augmented reality glasses project public, and it looks like it's going to connect via Bluetooth to your Android device. Awesome.
It looks like their marketing video is a remake of an older augmented reality concept video by artist +Keiichi Matsuda [1]. What are your thoughts on their (both Google's and Keiichi's!) vision for how these glasses will be used?
Aside: I can now safely laugh and say, "I told you so."
Work is work, as per usual. I work at a company called TechNoggins, doing all sorts of things. Primarily, I'm the Call Center Manager, handling calls for three states and eleven major cities. It's unfortunately fairly slow today, which means my salary isn't being augmented by an influx of web development work. Sad day.
I've been messing around with some of Facebook's features recently. I just linked "My Notes" to this blog, which seems like a cool feature, but it needs some work. It seems to have imported my posts twice?
Someone posted LeekSpin on the Grand Tournament forum. I've been subtly amused by the music to which this has been put, and have been listening to it for just over an hour now. You want to talk about overplaying, hrm? Full immersion, hrrrm?
Well, looks like I have a PC here in the office that I need to fix. So, until later, I'm gone. :P
How would you behave in a world with no anonymity?
Researchers from +Yuriy Zubovski's alma mater, Carnegie Mellon University, raise a few troubling questions about identity and privacy in a paper released at BlackHat in August [1]. They show results from a facial recognition study and hit some points about how it relates to Augmented Reality (AR), right up +Robert Rice's alley.
One of the authors, Alessandro Acquisti, also gave a talk at USENIX shortly after the release, of which there is video [2]. He explores some fascinating examples of how the images and videos people have posted online can be used to track identity, even in cases where you explicitly "untag" yourself, which many people simply do not consider.
[K]r.github.com. Keith Rarick's GitHub redirect. Total ass-kicker.
[L]inkedIn. Pretty straightforward, between hiring for our team at @Mirascape and the travel to and from various conferences and Meetups lately.
[M]irascape. The augmented reality platform I'm responsible for.
[N]oxBot. A nice PHP-powered IRC bot with various plugins. A bit out of date, but very powerful. Been using it for a couple things lately.
[O]K, QR Me!. A QR Code-generating link shortener I built.
[P]ostmark. Best Email delivery service I've used. Nice RESTful API, flat rate for emails sent.
[Q]uora. These guys nail Q&A. Check out all their buzz, too. But for some reason, I just don't stick.
Google [R]eader. “From your 1,040 subscriptions, over the last 30 days you read 21,549 items, clicked 274 items, starred 853 items, shared 37 items, and emailed 8 items.” -- </stats>
[S]erver Stats for Mirascape. Powered by Munin, it's how I keep track of the status and metrics of all my servers.
[T]witter. Not surprising. I love their webapp for my personal use, but own and manage at least five accounts using SplitTweet.
[U]serVoice. Pretty sweet tool I use for giving the communities I manage a good way to build a consensus on what they desire most. Examples I run: one for RolePlayGateway, and EVE UserVoice for EVE Online.
[X]DA Developers. An indispensable resource for getting rid of carrier-installed crap and running my own choice of software on the hardware I purchased!
[Z]ecco. Where I trade most of my public stocks. :)
Surprisingly populist, and there are a lot of Google-owned properties in there. I'm also using Chromium, so I think it prefers the roots of the sites I visit instead of searching through my history for individual pages.
Of Space, War, and Virtual Reality +CCP Games has always been on the right track with their most successful project, the single-server, fully persistent world of +EVE Online. Having launched +DUST 514 as an extension that exists within the same game world, CCP has raised the bar of innovation already, and continues to carve out territory with the announcement of +EVE: Valkyrie.
Valkyrie is a virtual reality game originally designed for +Oculus VR [1]. If you're not familiar with Oculus, they're the makers of the Oculus Rift. The Rift is a 3D display device that you wear like a pair of glasses, and literally move your head to look around the space. This interaction makes it feel like you are actually in the virtual space, and represents a huge step forward in immersion.
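The "move your head to look around" trick boils down to mapping the headset's tracked orientation onto the in-game camera every frame. A minimal sketch of that mapping, under my own assumed conventions (yaw 0 / pitch 0 looks down -Z, positive yaw turns right, positive pitch looks up, Y is up; this is not Oculus's actual SDK):

```python
import math

def forward_vector(yaw_deg, pitch_deg):
    """Unit 'look' direction computed from head yaw (left/right) and pitch (up/down).

    Assumed conventions: yaw 0 / pitch 0 faces -Z, positive yaw turns
    right, positive pitch looks up, and Y is the up axis.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (
        math.sin(yaw) * math.cos(pitch),   # x: rightward component
        math.sin(pitch),                   # y: upward component
        -math.cos(yaw) * math.cos(pitch),  # z: forward is -Z
    )
```

Feed the tracker's latest angles through something like this each frame and the virtual camera follows your head, which is precisely what makes the immersion feel so convincing (real SDKs use quaternions to avoid gimbal lock, but the idea is the same).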
What interests me most about EVE is that it's a truly alternative reality: a simulated universe of which only one exists, and to which there will never be a sequel or a "version 2". CCP has committed to simply improving the game over time rather than ever introducing an "EVE 2", building the universe iteratively over the past 10 years. This means that the time you invest in the alternative reality has a degree of permanence and importance, rather than the transience and fragmentation of other sharded universes.
Let's hope this integrates directly into the EVE Universe, allowing players to participate in actual fights, and not just "conquer this asteroid field to make it available to EVE players". Anything less would be a grave disappointment and, moreover, a potentially critical business mistake, as the two games, EVE Online and Valkyrie, operate in the same space.
Either way, I'm looking forward to seeing how this pans out.
Replies are automatically detected from social media, including Twitter, Facebook, and Google+. To add a comment, include a direct link to this post in your message and it'll show up here within a few minutes.
Of Space, War, and Virtual Reality +CCP Games has always been on the right track with their most successful project, the single-server, fully persistent world of +EVE Online. Having launched +DUST 514 as an extension that exists within the same game world, CCP has raised the bar of innovation already, and continues to carve out territory with the announcement of +EVE: Valkyrie.
Valkyrie is a virtual reality game originally designed for +Oculus VR [1]. If you're not familiar with Oculus, they're the makers of the Oculus Rift. The Rift is a 3D display device that you wear like a pair of glasses, and literally move your head to look around the space. This interaction makes it feel like you are actually in the virtual space, and represents a huge step forward in immersion.
What interests me most about EVE is the fact that it's a truly alternative reality; a simulated universe, of which only one exists, and to which there will never be a sequel or a "version 2". CCP has committed to simply improve the game over time rather than ever introduce an "EVE 2", building the universe iteratively over the past 10 years. This means that the time you invest in the alternative reality has a degree of permanence and importance, rather than the transience and fragmentation of other sharded universes.
Let's hope Valkyrie integrates directly into the EVE universe, with its pilots participating in actual fights, and not just "conquer this asteroid field to make it available to EVE players". Anything less would be a grave disappointment and, moreover, a potentially critical business mistake, since the two games, EVE Online and Valkyrie, operate in the same space.
Either way, I'm looking forward to seeing how this pans out.
+Joseph Coco I don't think you are being honest about the reality of what a business would have to set up to accept bitcoin.
It is naive to think they can just manage it themselves, especially given the ever-changing value and the varying international exchange rates, just as regular credit payment services have to deal with.
I think pro-bitcoin bias is coloring your responses. My research, as someone who considered accepting them, paints a much different picture than the one you are presenting.
I'm not anti-bitcoin. I'm all for all sorts of varieties of currency and transactions. But I want the truth and the facts to be said about them.
I also notice everyone is ducking the security and backing issue.
Banks are accredited and insured to protect their customers. There isn't any bitcoin handler that has that. Not even Square Marketplace.
If you want people to use them, great. If you want to use them yourself, great. But don't blow smoke trying to cover up the real negatives about them.
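The volatility point above is easy to illustrate. Here's a toy quote function (my own sketch, not any real payment processor's API) showing why a merchant can't simply post fixed bitcoin prices: the BTC amount for the same dollar price changes with every move in the spot rate:

```python
from decimal import Decimal, ROUND_UP

def usd_to_btc(price_usd, spot_rate_usd_per_btc):
    """Quote a USD price in BTC at a given spot rate.

    Because the rate moves constantly, the merchant must re-quote on
    every sale; in practice a processor locks the rate for a short
    window and absorbs the exchange risk.
    """
    price = Decimal(str(price_usd))
    rate = Decimal(str(spot_rate_usd_per_btc))
    # Round up to 8 decimal places (one satoshi) in the merchant's favor.
    return (price / rate).quantize(Decimal("0.00000001"), rounding=ROUND_UP)

# The same $25 item costs a different BTC amount as the rate moves:
print(usd_to_btc(25, 600))  # 0.04166667
print(usd_to_btc(25, 500))  # 0.05000000
```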
On the Ongoing Attacks between China, U.S., Russia, Israel, etc.

The latest round of evidence of the ongoing digital warfare between the superpowers is now being reported in the N.Y. Times [1], following an undeniably incriminating 60-page report by security firm Mandiant on Chinese attacks against the U.S. [2].
“Either they are coming from inside Unit 61398, or the people who run the most-controlled, most-monitored Internet networks in the world are clueless about thousands of people generating attacks from this one neighborhood.” — Kevin Mandia
The report goes on to track individual participants in the attack, tracing them back to the headquarters of P.L.A. Unit 61398.
Attacks of Chinese origin have been ongoing for many years, dating back notably to Operation Titan Rain [3] in 2003, in which attackers gained access to military intelligence networks at organizations such as Lockheed Martin, Sandia National Laboratories, Redstone Arsenal, and NASA [4]. Direct military targets were also included in the assault, among them the U.S. Army Information Systems Engineering Command at Fort Huachuca, Arizona; the Defense Information Systems Agency in Arlington, Virginia; the Naval Ocean Systems Center, a Defense Department installation in San Diego, California; and the U.S. Army Space and Strategic Defense installation in Huntsville, Alabama [5].
These ongoing campaigns, labeled "Advanced Persistent Threats" (APTs) by the American military, have been considered acts of war by both the White House [6] and the Department of Defense [7] as far back as 2011, and they are not unique to China. You may remember the 2007 attacks on Estonia [8], which have been attributed to entities within Russian territory operating with the assistance of the Russian government [9]. Those attacks disabled a wide array of Estonian government sites, rendering services in the world's most digitally connected country unusable. They also disabled ATMs, effectively shutting down some portion of the Estonian economy.
The United States [and arguably Israel, [10]] have also been actively participating in these attacks [11], deploying Flame and Stuxnet against Iran; the coordinated efforts of those tools made international headlines this past year when they were used to disable Iranian nuclear centrifuges in an attempt to slow the progress of Iran's nuclear program [12]. These efforts are ongoing, with the more recent Gauss and Duqu malware [13] continuing to target Middle Eastern countries.
“From his first months in office, President Obama secretly ordered increasingly sophisticated attacks on the computer systems that run Iran’s main nuclear enrichment facilities, significantly expanding America’s first sustained use of cyberweapons, according to participants in the program.” — +The New York Times
Obama reportedly went on to sign a classified directive last year [14] enabling the government to seize control of private networks, and the 2012 NDAA (National Defense Authorization Act) includes terms [15, section 954] that authorize offensive attacks on foreign threats [16]. Official United States policy already deems any cyberattack on the U.S. an "act of war" [17], and it appears these kinds of actions and attacks have already been made legal.
While it may once have been the subject of fiction [18], it is now a harsh reality that we are in the middle of a new era of warfare, and the battles are already well underway: countries around the world are openly engaging in offensive attacks on one another that impact economies on a massive scale. I don't know what else to call this other than a world war; even the CIA's Center for the Study of Intelligence (CSI) predicted it [19], as did many others even earlier [20].
Here's a thought: if our Constitution gives us the right to keep and bear arms, and the government deems these types of attacks acts of war, then isn't it our right to keep and bear these arms as well? Yet another case for a mass-algorate society [21], a position Mr. Obama appears to share [22], at the very least.